Aiming at the problems of insufficient generalization ability, poor stability and difficulty in meeting real-time requirements in facial expression recognition, a real-time facial expression recognition method based on a multi-scale kernel feature convolutional neural network was proposed. Firstly, an improved MSSD (MobileNet + Single Shot multiBox Detector) lightweight face detection network was proposed, and the detected face coordinate information was tracked by a Kernel Correlation Filter (KCF) model to improve detection speed and stability. Then, three linear bottleneck layers with convolution kernels of three different scales were used to form three branches; a multi-scale kernel convolution unit was constructed by fusing the branch features through channel concatenation, and the feature diversity was exploited to improve the accuracy of expression recognition. Finally, in order to improve the generalization ability of the model and prevent over-fitting, different linear transformations were applied for data augmentation to enlarge the dataset, and the model trained on the FER-2013 facial expression dataset was transferred to the small-sample CK+ dataset for retraining. The experimental results show that the recognition rate of the proposed method reaches 73.0% on the FER-2013 dataset, 1.8 percentage points higher than that of the Kaggle Expression Recognition Challenge champion, and 99.5% on the CK+ dataset. For 640×480 video, the face detection speed of the proposed method reaches 158 frames per second, 6.3 times that of the mainstream face detection network MTCNN (MultiTask Cascaded Convolutional Neural Network). At the same time, the overall speed of face detection and expression recognition reaches 78 frames per second. These results show that the proposed method achieves fast and accurate facial expression recognition.
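The multi-scale branch-and-fuse step can be sketched in plain NumPy: one branch per kernel scale, with the branch outputs fused by channel concatenation. This is a minimal illustration; the kernel sizes (1, 3, 5), the averaging kernels and the single-channel input are assumptions, not the paper's actual configuration.

```python
import numpy as np

def conv2d_same(x, kernel):
    """Single-channel 2-D cross-correlation with zero padding ('same' output size)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    h, w = x.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def multi_scale_unit(x, kernel_sizes=(1, 3, 5)):
    """One branch per kernel scale; branch outputs fused by channel concatenation."""
    # Averaging kernels are placeholders for learned weights (illustrative assumption).
    kernels = [np.ones((s, s)) / (s * s) for s in kernel_sizes]
    branches = [conv2d_same(x, k) for k in kernels]
    return np.stack(branches, axis=0)  # shape: (num_branches, H, W)

features = multi_scale_unit(np.random.rand(8, 8))
```

Each branch preserves the spatial size, so the three feature maps stack cleanly into a multi-channel output, which is the essence of channel-combination fusion.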
To solve the problem that heterogeneous big data processing has low real-time response capability in the Internet of Things (IoT), data processing and persistence schemes based on Hadoop were analyzed, and a Hadoop big data processing system model based on "Context", named HDS (Hadoop big Data processing System), was proposed. This model used the Hadoop framework to perform parallel data processing and persistence, and heterogeneous data were abstracted as "Contexts", the unified objects processed in HDS. Definitions of "Context Distance" and "Context Neighborhood System (CNS)" were given based on the temporal-spatial characteristics of Contexts. A "Context Queue (CQ)" was designed as an assistance storage to overcome the defect of low real-time response capability of data processing in the Hadoop framework. In particular, based on the temporal and spatial characteristics of Contexts, the optimization of task reorganization for client requests to CQ was described in detail. Finally, taking the vehicle scheduling problem in petroleum products distribution as an example, the data processing performance and real-time response capability were tested by MapReduce distributed parallel computing experiments. The experimental results show that, compared with the ordinary computing system SDS (Single Data processing System), HDS not only has an obvious advantage in big data processing capability but also effectively overcomes the low real-time response defect of Hadoop. In a 10-server experimental environment, the data processing capability of HDS exceeds that of SDS by more than 200 times, and the real-time response capability of HDS with the assistance of CQ exceeds that without CQ by more than 270 times.
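The "Context Distance", "Context Neighborhood System" and "Context Queue" notions can be sketched as follows. The distance function (a weighted sum of temporal gap and Euclidean spatial gap), the weights and the dictionary representation of a Context are all hypothetical assumptions for illustration; the paper's actual definitions may differ.

```python
import math
from collections import deque

def context_distance(c1, c2, w_t=0.5, w_s=0.5):
    """Hypothetical 'Context Distance': weighted temporal gap plus spatial gap."""
    dt = abs(c1["t"] - c2["t"])
    ds = math.dist(c1["pos"], c2["pos"])
    return w_t * dt + w_s * ds

def neighborhood(ctx, contexts, radius):
    """'Context Neighborhood System': all stored contexts within a distance threshold."""
    return [c for c in contexts if c is not ctx
            and context_distance(ctx, c) <= radius]

class ContextQueue:
    """'Context Queue' as assistance storage: a bounded buffer of recent contexts
    that can answer neighborhood queries without a full Hadoop round trip."""
    def __init__(self, maxlen=100):
        self.q = deque(maxlen=maxlen)

    def push(self, ctx):
        self.q.append(ctx)

    def query(self, ctx, radius):
        return neighborhood(ctx, list(self.q), radius)
```

A bounded in-memory queue like this trades completeness for latency: recent, "nearby" contexts are served immediately, while the full dataset stays in the Hadoop persistence layer.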
To solve the problem that traditional skeleton-driven deformation fails to preserve the detailed features of a model, a subdivision-based skeleton-driven mesh deformation method was proposed. Firstly, after the skeleton and control mesh were generated on the deformed region, the relationships between the skeleton and the control mesh, and between the subdivision surface of the control mesh and the deformed region, were established. Secondly, when the skeleton was modified according to the desired deformation result, the change information of the corresponding subdivision surface was transformed into an alteration of the mesh gradient field, and the deformed mesh was reconstructed by solving the Poisson equation. Examples show that the proposed deformation method obtains good editing effects on different mesh models and effectively preserves detailed features after deformation. Compared with the traditional skeleton-driven deformation method, it proves easy to operate and preserves detailed features effectively, and it is suitable for editing models with complex and rich geometric details.
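The gradient-domain idea can be illustrated in one dimension: edit the difference (gradient) field, then reconstruct values that follow the edited gradients while keeping the boundary fixed. This is only a 1-D toy analogue of Poisson-based mesh reconstruction, not the paper's surface algorithm; the linear boundary-error correction is a simplifying assumption.

```python
import numpy as np

def poisson_1d(grads, left, right):
    """Reconstruct n+1 values from n desired gradients with fixed endpoints.
    Integrate the gradients, then spread the boundary mismatch linearly,
    a 1-D stand-in for solving the Poisson equation on a mesh region."""
    v = np.concatenate(([left], left + np.cumsum(grads)))
    err = right - v[-1]                       # mismatch at the fixed boundary
    v = v + err * np.linspace(0.0, 1.0, len(v))  # distribute it smoothly
    return v

# Doubling the gradients (the "edit") while pinning both endpoints:
edited = poisson_1d(2.0 * np.ones(4), 0.0, 4.0)
```

The key property, as in gradient-field mesh editing, is that local detail (relative differences) drives the result while the boundary constraints keep the edited region attached to the rest of the model.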
Aiming at fair and efficient bandwidth allocation for geographically distributed control systems, a distributed dynamic bandwidth allocation algorithm was proposed. Firstly, the bandwidth allocation problem was formulated as a convex optimization problem, namely, maximizing the sum of the utilities of all the control systems. Then, the idea of distributed bandwidth allocation was adopted to make the control systems vary their sampling periods based on congestion information fed back from the network, and to obtain the maximum usable sampling rate or transmission rate. The interaction between control systems and links was modelled as a time-delay dynamical system, and a Proportional-Integral (PI) controller was used as the link queue controller to realize the algorithm. The simulation results show that the proposed bandwidth allocation algorithm makes the transmission rates of all plants converge within 10 seconds to the value at which all plants share the bandwidth equally. At the same time, the queue of the PI controller stabilizes around the desired set point of 50 packets, and the controller accurately and steadily tracks the input signal to maximize the performance of all control systems.
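The PI link-queue regulation can be sketched with a discrete-time simulation: the controller drives the queue length toward the 50-packet set point mentioned in the abstract. The gains `kp` and `ki`, the step count and the simplified queue dynamics are illustrative assumptions, not the paper's tuned values.

```python
def simulate_pi_queue(kp=0.5, ki=0.01, setpoint=50.0, steps=500):
    """Toy discrete-time PI queue controller: the feedback term adjusts the
    excess of arrival rate over service rate so the queue settles at setpoint."""
    q, integ = 0.0, 0.0
    for _ in range(steps):
        err = setpoint - q
        integ += err                       # integral of the tracking error
        q = max(0.0, q + kp * err + ki * integ)  # queue cannot go negative
    return q

final_queue = simulate_pi_queue()
```

With these (assumed) gains the closed loop is stable, so the queue converges to the set point; the integral term removes the steady-state offset a pure proportional controller would leave.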
The reliability of Embedded System Hardware (ESH) is very important, as it is directly related to the quality and longevity of the embedded system. To analyze the reliability of ESH, it was studied from the hardware perspective using Copula functions. At first, an abstract formalization of the ESH was defined at the composition level. Then the reliability of each function module of the ESH was modeled by considering the integration of hardware and software, and a Copula function was used to establish the reliability model of the whole ESH. Finally, the parameters of the proposed reliability model were estimated, and a specific calculation example using the proposed model was presented and compared with models based on some other Copula functions. The results show that the proposed Copula-based model is effective.
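To illustrate how a Copula combines module reliabilities with dependence, here is a sketch using the Clayton copula, one common choice; the abstract does not say which Copula family the paper uses, so the family, the parameter value and the two-module series structure are all assumptions.

```python
def clayton_copula(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def series_reliability(r1, r2, theta):
    """Joint survival probability of two positively dependent modules in series,
    modeled by applying a Clayton copula to the module reliabilities (illustrative)."""
    return clayton_copula(r1, r2, theta)

r_sys = series_reliability(0.9, 0.8, theta=2.0)
```

The point of the Copula is visible in the numbers: with positive dependence the system reliability lies between the independence value `r1 * r2` and the perfect-dependence bound `min(r1, r2)`, which a plain product model cannot express.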
Most current trajectory-based abnormal behavior detection algorithms do not consider the internal information of the trajectory, which may lead to a high false alarm rate. An abnormal behavior detection method based on trajectory segments and a topic model was presented. Firstly, the original trajectories were partitioned into trajectory segments according to turning angles. Secondly, the behavior characteristic information was extracted by quantizing the observations from these segments into different visual words. Then the spatio-temporal relationships among the trajectories were explored with the Latent Dirichlet Allocation (LDA) model. Finally, behavior pattern analysis and abnormal behavior detection were implemented by learning the corresponding generative topic model combined with Bayesian theory. Simulation experiments on behavior pattern analysis and abnormal behavior detection were conducted on two video scenes, and different kinds of abnormal behavior patterns were detected. The experimental results show that, combined with trajectory segmentation, the proposed method can mine the internal behavioral characteristics of trajectories to identify a variety of abnormal behavior patterns and improve the accuracy of abnormal behavior detection.
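The first step, partitioning a trajectory at sharp turns, can be sketched directly: compute the heading of each displacement and cut wherever the heading change exceeds a threshold. The 30° threshold and the segment-boundary convention (the turning point starts the next segment) are assumptions for illustration.

```python
import math

def heading(p, q):
    """Direction of the displacement from point p to point q, in radians."""
    return math.atan2(q[1] - p[1], q[0] - p[0])

def segment_by_turning(points, thresh_deg=30.0):
    """Split a trajectory (list of >= 2 (x, y) points) into segments at points
    where the turning angle exceeds thresh_deg."""
    thresh = math.radians(thresh_deg)
    segs, cur = [], [points[0]]
    for i in range(1, len(points)):
        cur.append(points[i])
        if i + 1 < len(points):
            turn = heading(points[i], points[i + 1]) - heading(points[i - 1], points[i])
            turn = (turn + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
            if abs(turn) > thresh:
                segs.append(cur)
                cur = [points[i]]  # turning point starts the next segment
    segs.append(cur)
    return segs
```

Each resulting segment is roughly straight, so quantizing its observations (position, direction, speed) into visual words gives the LDA model cleaner "documents" than whole trajectories would.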
To overcome the problem that evaluating the applicability of Dempster's combination rule yields fuzzy or even inaccurate results when the bodies of evidence contain non-singleton evidence whose basic probability assignments differ greatly between any two bodies of evidence, a modified pignistic probability distance was proposed to describe the relevance between bodies of evidence. Then, combining the modified pignistic probability distance with the classical conflict coefficient, a new method for evaluating the applicability of Dempster's combination rule was presented. In the proposed method, a new conflict coefficient was defined to measure the conflict between bodies of evidence: it equals the modified pignistic probability distance when the classical conflict coefficient is zero, and the average of the modified pignistic probability distance and the classical conflict coefficient otherwise. The results of numerical examples demonstrate that, compared with the evaluation method based on the pignistic probability distance, the proposed method based on the modified pignistic probability distance provides more applicable and reasonable evaluations of the applicability of Dempster's combination rule.
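The combination rule for the new conflict coefficient stated in the abstract can be sketched as follows. The classical conflict coefficient and the standard pignistic probability distance are implemented in their textbook forms; the paper's *modified* pignistic distance is not specified in the abstract, so the standard distance stands in for it here as an assumption.

```python
from itertools import chain, combinations

def conflict_k(m1, m2):
    """Classical conflict coefficient: total mass on disjoint focal-element pairs.
    Masses are dicts mapping frozenset focal elements to probabilities."""
    return sum(v1 * v2 for A, v1 in m1.items()
               for B, v2 in m2.items() if not (A & B))

def betP(m, theta):
    """Pignistic transform: spread each focal element's mass over its singletons."""
    return {x: sum(v / len(A) for A, v in m.items() if x in A) for x in theta}

def dif_betP(m1, m2, theta):
    """Pignistic probability distance: max |BetP1(A) - BetP2(A)| over subsets A."""
    p1, p2 = betP(m1, theta), betP(m2, theta)
    subsets = chain.from_iterable(combinations(theta, r)
                                  for r in range(1, len(theta) + 1))
    return max(abs(sum(p1[x] - p2[x] for x in A)) for A in subsets)

def new_conflict(m1, m2, theta):
    """The abstract's rule: the pignistic distance alone when k == 0,
    otherwise the average of the pignistic distance and k."""
    k = conflict_k(m1, m2)
    d = dif_betP(m1, m2, theta)
    return d if k == 0 else 0.5 * (d + k)
```

Combining the two measures guards against each one's blind spot: the classical `k` can be zero for evidently divergent evidence, while the pignistic distance alone ignores mass-level conflict.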
To investigate cascading-failure attack strategies on complex networks via community detection, the initial load of a node was defined by the betweenness of the node and its neighbors, a definition that comprehensively considers the information of the nodes, and the load of a broken node was redistributed to its neighbors according to a local preferential probability. When the network was intentionally attacked based on community detection, the effects of coupling strength on the invulnerability of Watts-Strogatz (WS), Barabási-Albert (BA), Erdős-Rényi (ER) and World-Local (WL) networks, as well as networks with overlapping and non-overlapping communities, were studied under different attack strategies. The results show that the network's cascading invulnerability is negatively correlated with coupling strength; for the different types of networks, under the premise that the fast division algorithm correctly detects the community structure, the network's invulnerability is lowest when the node with the largest betweenness is attacked; after detecting overlapping communities using the Clique Percolation Method (CPM), the network's invulnerability is lowest when the overlapping node with the largest betweenness is attacked. It can be concluded that the network is damaged most severely under the community-detection-based attack strategy.
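The cascading mechanism itself, capacity proportional to initial load, failed load redistributed to surviving neighbors in proportion to their current load, can be sketched on a toy graph. The uniform loads, the capacity rule `(1 + alpha) * load` and the three-node path graph are illustrative assumptions; the papers' load definitions use betweenness.

```python
def cascade(adj, load, alpha, attacked):
    """Simulate a load-redistribution cascade. Each node's capacity is
    (1 + alpha) * initial load; a failed node's load is split among its
    surviving neighbors in proportion to their current load (preferential
    probability). Returns the set of failed nodes."""
    cap = {v: (1.0 + alpha) * load[v] for v in adj}
    cur = dict(load)
    failed, frontier = set(), {attacked}
    while frontier:
        failed |= frontier
        for v in frontier:
            nbrs = [u for u in adj[v] if u not in failed]
            total = sum(cur[u] for u in nbrs)
            for u in nbrs:
                cur[u] += cur[v] * cur[u] / total  # load-proportional share
        # any surviving node pushed over capacity fails in the next round
        frontier = {u for u in adj if u not in failed and cur[u] > cap[u]}
    return failed

path = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
loads = {'a': 1.0, 'b': 1.0, 'c': 1.0}
```

On this path graph a small tolerance (`alpha = 0.5`) lets a single attack propagate through the whole network, while a large one (`alpha = 2.0`) absorbs it, the tolerance-versus-invulnerability trade-off the experiments vary.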
To investigate the effects of community structure on cascading invulnerability, in the framework of a network with community structure, the initial load of a node was defined by its betweenness, and the load of a broken node was redistributed to its neighboring nodes according to a preferential probability. When the node with the largest load was intentionally attacked, the relations between the network's invulnerability and the load exponent, the intra-community coupling strength, the inter-community coupling strength and the modularity function were studied. The results show that the network's cascading invulnerability is positively correlated with the intra-community coupling strength, the inter-community coupling strength and the modularity function, and negatively correlated with the load exponent. Comparison with the BA (Barabási-Albert) scale-free network and the WS (Watts-Strogatz) small-world network indicates that community structure lowers the network's cascading invulnerability; thus the more homogeneous the betweenness distribution is, the stronger the network's cascading invulnerability is.